TCGAN: Convolutional Generative Adversarial Network for Time Series Classification and Clustering
Recent works have demonstrated the superiority of supervised Convolutional
Neural Networks (CNNs) in learning hierarchical representations from time
series data for successful classification. These methods require sufficiently
large labeled data for stable learning; however, acquiring high-quality labeled
time series data can be costly and potentially infeasible. Generative
Adversarial Networks (GANs) have achieved great success in enhancing
unsupervised and semi-supervised learning. Nonetheless, to the best of our knowledge,
it remains unclear how effectively GANs can serve as a general-purpose solution
to learn representations for time series recognition, i.e., classification and
clustering. The above considerations inspire us to introduce a Time-series
Convolutional GAN (TCGAN). TCGAN learns by playing an adversarial game between
two one-dimensional CNNs (i.e., a generator and a discriminator) in the absence
of label information. Parts of the trained TCGAN are then reused to construct a
representation encoder to empower linear recognition methods. We conducted
comprehensive experiments on synthetic and real-world datasets. The results
demonstrate that TCGAN is faster and more accurate than existing time-series
GANs. The learned representations enable simple classification and clustering
methods to achieve superior and stable performance. Furthermore, TCGAN retains
high efficacy in scenarios with few-labeled and imbalanced-labeled data. Our
work provides a promising path to effectively utilize abundant unlabeled time
series data.
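The encoder-reuse idea described in the abstract can be sketched as follows. This is a toy NumPy illustration, not the paper's actual architecture: the two convolution layers, kernel sizes, and random weights are arbitrary stand-ins for a trained discriminator, whose hidden activations (everything except the real/fake head) serve as the representation fed to a simple linear classifier or clustering method.

```python
import numpy as np

def conv1d(x, w, b):
    # valid 1-D convolution followed by ReLU
    k = w.shape[0]
    out = np.array([x[i:i + k] @ w + b for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
series = rng.standard_normal(64)  # one univariate time series

# hypothetical discriminator: two conv layers plus a scalar real/fake head
w1, b1 = rng.standard_normal(5), 0.1
w2, b2 = rng.standard_normal(5), 0.1
head_w = rng.standard_normal(56)  # feature length after two valid convs: 64-4-4 = 56

h = conv1d(conv1d(series, w1, b1), w2, b2)  # hidden features
score = h @ head_w                           # discriminator output, discarded after training

# representation encoder = discriminator minus its head
representation = h  # shape (56,); input to a linear classifier / k-means
```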
Fast approximately timed simulation
In this paper we present a technique for fast approximately timed simulation of software within a virtual prototyping framework. Our method performs a static analysis of the program control flow graph to construct annotations of the simulated program, combined with dynamic performance information. The static analysis estimates execution time based on a target architecture model. The delays introduced by instruction fetch and data cache misses are evaluated dynamically. At the end of each block, static and dynamic information are combined with branch target prediction to compute the total execution time of the blocks. As a result, we can provide approximate performance estimates with a high simulation speed that is still usable for software developers.
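The static/dynamic split described above can be sketched in a few lines. The block names, cycle counts, and penalty values below are invented for illustration; the structure mirrors the abstract: a statically computed per-block cycle annotation, corrected at run time by dynamically observed cache misses and branch mispredictions.

```python
# statically annotated cycle estimate for each basic block (hypothetical values)
static_cycles = {"B0": 12, "B1": 7, "B2": 20}

MISS_PENALTY = 50       # assumed cost of an instruction/data cache miss, in cycles
MISPREDICT_PENALTY = 14  # assumed cost of a branch misprediction, in cycles

def block_time(block, icache_misses, dcache_misses, mispredicted):
    # combine the static annotation with dynamically observed events
    t = static_cycles[block]
    t += (icache_misses + dcache_misses) * MISS_PENALTY
    if mispredicted:
        t += MISPREDICT_PENALTY
    return t

# a simulated execution trace: (block, i-cache misses, d-cache misses, mispredicted?)
trace = [("B0", 1, 0, False), ("B1", 0, 2, True), ("B2", 0, 0, False)]
total = sum(block_time(*step) for step in trace)  # 62 + 121 + 20 = 203
```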
Time-delay neural network for continuous emotional dimension prediction from facial expression sequences
Automatic continuous affective state prediction from naturalistic facial expression is a very challenging research topic but very important in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to predict affective values for each individual video frame, while in the second stage, a Time-Delay Neural Network (TDNN) is proposed to model the temporal relationships between
consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic
facial expressions. The proposed approach won the affect recognition sub-challenge of the third international Audio/Visual Emotion Recognition Challenge (AVEC 2013).
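The second-stage idea can be illustrated with a minimal sketch: the TDNN consumes a delay window of previous frame-level predictions, so noisy per-frame outputs are smoothed toward the slowly changing emotional-state trajectory. The window length and weights below are stand-ins (a simple averaging window instead of learned TDNN weights).

```python
import numpy as np

def tdnn_stage2(frame_preds, weights):
    # slide a delay window of previous first-stage predictions over time;
    # pad the start by repeating the first prediction
    d = len(weights)
    padded = np.concatenate([np.full(d - 1, frame_preds[0]), frame_preds])
    return np.array([padded[t:t + d] @ weights for t in range(len(frame_preds))])

preds = np.array([0.1, 0.9, 0.2, 0.8, 0.3])  # noisy first-stage outputs
w = np.full(3, 1 / 3)                         # averaging window stands in for learned weights
smoothed = tdnn_stage2(preds, w)              # one smoothed value per frame
```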
An Overview of Recent Development in Composite Catalysts from Porous Materials for Various Reactions and Processes
Catalysts are important to the chemical industry and environmental remediation due to their effective conversion of one chemical into another. Among them, composite catalysts have attracted continuous attention during the past decades. Nowadays, composite catalysts are being used more and more to meet the practical catalytic performance requirements of the chemical industry: high activity, high selectivity, and good stability. In this paper, we review our recent work on the development of composite catalysts, mainly focusing on those obtained from porous materials such as zeolites, mesoporous materials, carbon nanotubes (CNT), etc. Six types of porous composite catalysts are discussed, including amorphous oxide modified zeolite composite catalysts, zeolite composites prepared by co-crystallization or overgrowth, hierarchical porous catalysts, host-guest porous composites, inorganic and organic mesoporous composite catalysts, and polymer/CNT composite catalysts.
Electronic design automation with graphic processors: a survey
This survey provides a state-of-the-art review of the existing literature on Graphics Processing Unit (GPU)-based EDA computing. Considering the diversity of Very Large Scale Integration (VLSI) Computer-Aided Design (CAD) algorithms, it puts forward a taxonomy of EDA computing patterns that can be used as basic building blocks to construct complex EDA applications.
Distributed time, conservative parallel logic simulation on GPUs
Logic simulation is the primary method to verify the correctness of IC designs. However, today's complex VLSI designs pose ever higher demand for the throughput of logic simulators. In this work, a parallel logic simulator was developed by leveraging the computing power of modern graphics processing units (GPUs). To expose more parallelism, we implemented a conservative parallel simulation approach, the CMB algorithm, on NVIDIA GPUs. The simulation processing is mapped to GPU hardware at the finest granularity. With carefully designed data structures and data flow organizations, our GPU-based simulator could overcome many problems that hindered efficient implementations of the CMB algorithm on traditional parallel computers. In order to efficiently use the relatively limited capacity of GPU memory, a novel memory management mechanism was proposed to dynamically allocate and recycle GPU memory during simulation. We also introduced a CPU/GPU co-processing strategy for the best usage of computing resources. Experimental results showed that our GPU-based simulator could outperform a CPU baseline event-driven simulator by a factor of 29.2.
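The conservative (CMB-style) rule the abstract refers to can be sketched in plain Python. The channels and timestamps below are invented for illustration: each input channel delivers timestamped events in order, and an element may only process events no later than the earliest pending timestamp across all of its input channels, since no straggler can arrive before that horizon.

```python
from collections import deque

# each input channel carries (timestamp, value) events, delivered in order
channels = {
    "a": deque([(1, 0), (4, 1), (9, 0)]),
    "b": deque([(2, 1), (4, 0), (7, 1)]),
}

def conservative_step(channels):
    # CMB rule: events up to the minimum front timestamp over all channels
    # are safe to process; anything later might be preceded by a straggler
    horizon = min(q[0][0] for q in channels.values())
    processed = []
    for name, q in channels.items():
        while q and q[0][0] <= horizon:
            processed.append((name, *q.popleft()))
    return horizon, processed

horizon, done = conservative_step(channels)  # horizon 1: only channel a's first event is safe
```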
A feasibility study of 2.5D system integration
Excessive on-chip wire length and fast-increasing fabrication cost have been the main factors impairing the effectiveness of monolithic integration of VLSI systems. To address these problems, this paper investigates a die-stacking-based system integration strategy (2.5D system integration). We performed a series of design case studies and developed layout design tools for stacking chips. Our results show that this new scheme has the potential to outperform its monolithic equivalent.